Quantized Random Projection



Quantized Random Projections and Non-Linear Estimation of Cosine Similarity

Ping Li, Michael Mitzenmacher, Martin Slawski

Neural Information Processing Systems

Random projections constitute a simple, yet effective technique for dimensionality reduction with applications in learning and search problems. In the present paper, we consider the problem of estimating cosine similarities when the projected data undergo scalar quantization to $b$ bits. We here argue that the maximum likelihood estimator (MLE) is a principled approach to deal with the non-linearity resulting from quantization, and subsequently study its computational and statistical properties. A specific focus is on the trade-off between bit depth and the number of projections given a fixed budget of bits for storage or transmission. Along the way, we also touch upon the existence of a qualitative counterpart to the Johnson-Lindenstrauss lemma in the presence of quantization.
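To make the estimation problem concrete, the following is a minimal sketch of the one-bit case (b = 1, sign quantization), where the MLE of the cosine similarity has a closed form through the collision probability of signed projections; for general b the likelihood has to be maximized numerically. The function name and setup are illustrative, not taken from the paper's code.

    import numpy as np

    def one_bit_cosine_mle(x, y, k=1024, seed=0):
        # Project onto k random Gaussian directions and keep only the signs
        # (scalar quantization to b = 1 bit per projection).
        rng = np.random.default_rng(seed)
        R = rng.standard_normal((k, x.size))
        qx, qy = np.sign(R @ x), np.sign(R @ y)
        # Each sign agreement is Bernoulli(1 - theta/pi), theta = angle(x, y),
        # so the MLE of theta inverts the empirical agreement rate, and the
        # MLE of the cosine follows by invariance of the MLE.
        p_hat = np.mean(qx == qy)
        return np.cos(np.pi * (1.0 - p_hat))

    x = np.random.randn(10_000)
    y = x + 0.5 * np.random.randn(10_000)
    print(one_bit_cosine_mle(x, y))  # approaches cos(x, y) as k grows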


Random Projections with Asymmetric Quantization

Xiaoyun Li, Ping Li

Neural Information Processing Systems

The method of random projection has been a popular tool for data compression, similarity search, and machine learning. In many practical scenarios, quantizing the randomly projected data can further reduce storage cost and enable more efficient retrieval, at the price of only a small loss in accuracy.
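As a hedged illustration of the asymmetric setting the title refers to (one side quantized for storage, the query side kept at full precision), here is a one-bit sketch. The debiasing constant follows from E[sign(u) v] = sqrt(2/pi) * cov(u, v) / sd(u) for jointly Gaussian (u, v); this is a sketch of the setup, not a claim about the paper's estimator.

    import numpy as np

    def asymmetric_inner_product(x, y, k=2048, seed=0):
        # Database vector x: stored as the signs of its Gaussian projections
        # (1 bit each) plus its norm. Query vector y: projected, not quantized.
        rng = np.random.default_rng(seed)
        R = rng.standard_normal((k, x.size))
        qx = np.sign(R @ x)        # computed and stored offline
        v = R @ y                  # full-precision query side
        # For jointly Gaussian (u, v), E[sign(u) v] = sqrt(2/pi) <x, y> / ||x||,
        # hence the rescaling below debiases the mixed inner product.
        return np.sqrt(np.pi / 2.0) * np.linalg.norm(x) * np.mean(qx * v)

    x = np.random.randn(5_000)
    y = 0.3 * x + np.random.randn(5_000)
    print(asymmetric_inner_product(x, y), x @ y)  # estimate vs. exact value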


Reviews: Quantized Random Projections and Non-Linear Estimation of Cosine Similarity

Neural Information Processing Systems

I actually liked the paper quite a bit, but I do have at least a few concerns. First of all, it is very important in establishing the result that the inner product between the observations is *not* estimated by simply taking the inner product between the projected observations, but by computing the MLE of the inner product. I see how the authors do this in the Gaussian case, but for this to be relevant in practice (on truly high-dimensional data) it seems important for this to be possible with other kinds of randomized embeddings (such as those produced by the Fast Johnson-Lindenstrauss Transform). Some discussion of whether the techniques presented would be relevant in such a setting would be welcome. My other main concern is the manner in which the authors have brushed aside the issue of normalization.


Reviews: Simple strategies for recovering inner products from coarsely quantized random projections

Neural Information Processing Systems

Random projections are often used in learning tasks that involve dimensionality reduction. The goal of the additional quantization step is data compression, which reduces the space complexity of learning algorithms and allows for more efficient communication in distributed settings.
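One concrete instance of such a simple strategy, sketched under the assumption of a Lloyd-Max-style scalar quantizer with centroid reconstruction levels (not necessarily the paper's exact estimator): treat the quantized projections as if they were full precision and take their cosine. This plug-in estimate is exact at similarity 0 and 1 and mildly biased downward in between, with the bias shrinking as the bit depth grows.

    import numpy as np
    from scipy.stats import norm

    def gaussian_quantizer(thresholds):
        # Scalar quantizer for N(0,1) inputs: cells defined by the thresholds,
        # reconstruction level of each cell = its conditional mean,
        # E[g | a < g < b] = (phi(a) - phi(b)) / (Phi(b) - Phi(a)).
        edges = np.concatenate(([-np.inf], thresholds, [np.inf]))
        levels = np.array([
            (norm.pdf(a) - norm.pdf(b)) / (norm.cdf(b) - norm.cdf(a))
            for a, b in zip(edges[:-1], edges[1:])
        ])
        return lambda z: levels[np.searchsorted(thresholds, z)]

    def plug_in_cosine(x, y, q, k=4096, seed=0):
        # Simple strategy: quantize both sets of projections, then take the
        # cosine of the quantized vectors as if they were full precision.
        rng = np.random.default_rng(seed)
        R = rng.standard_normal((k, x.size))
        qu = q(R @ (x / np.linalg.norm(x)))
        qv = q(R @ (y / np.linalg.norm(y)))
        return qu @ qv / (np.linalg.norm(qu) * np.linalg.norm(qv))

    # Classic 2-bit (4-level) Lloyd-Max thresholds for a standard Gaussian.
    q2 = gaussian_quantizer(np.array([-0.9816, 0.0, 0.9816]))
    x = np.random.randn(5_000)
    y = 0.6 * x + np.random.randn(5_000)
    true_cos = x @ y / (np.linalg.norm(x) * np.linalg.norm(y))
    print(plug_in_cosine(x, y, q2), true_cos)  # estimate vs. exact cosine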

